Core ML


FedKit: Enabling Cross-Platform Federated Learning for Android and iOS

He, Sichang, Tang, Beilong, Zhang, Boyan, Shao, Jiaoqi, Ouyang, Xiaomin, Nugraha, Daniel Nata, Luo, Bing

arXiv.org Artificial Intelligence

We present FedKit, a federated learning (FL) system tailored for cross-platform FL research on Android and iOS devices. FedKit pipelines cross-platform FL development by enabling model conversion, hardware-accelerated training, and cross-platform model aggregation. Our FL workflow supports flexible machine learning operations (MLOps) in production, facilitating continuous model delivery and training. We have deployed FedKit in a real-world use case for health data analysis on university campuses, demonstrating its effectiveness. FedKit is open-source at https://github.com/FedCampus/FedKit.
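Cross-platform model aggregation in FL systems of this kind typically follows a federated-averaging scheme. The snippet below is a minimal, framework-agnostic sketch of weighted parameter averaging (FedAvg), not FedKit's actual implementation; all names and numbers are chosen for illustration.

```python
# Minimal FedAvg sketch: average client model parameters, weighted by
# each client's number of training examples. Illustrative only.

def fed_avg(client_params, client_sizes):
    """client_params: list of per-client parameter vectors (lists of floats).
    client_sizes: number of training examples each client used."""
    total = sum(client_sizes)
    n_params = len(client_params[0])
    global_params = [0.0] * n_params
    for params, size in zip(client_params, client_sizes):
        weight = size / total
        for i, p in enumerate(params):
            global_params[i] += weight * p
    return global_params

# Two clients with different amounts of data contribute unequally:
aggregated = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
```

Because each client's contribution is weighted by its data size, the client with three examples pulls the global parameters three times as hard as the client with one.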


Apple's Artificial Intelligence and Machine Learning are more advanced than many believe

#artificialintelligence

Apple has been working on Artificial Intelligence and Machine Learning for decades, but longtime Apple analyst Tim Bajarin believes that Apple is more cautious about publicly touting its own AI prowess in light of the recent controversies surrounding Microsoft's Bing AI and Google's Bard, a ChatGPT competitor. From a historical standpoint, Apple began showing off early AI concepts when it introduced its futuristic Knowledge Navigator in 1987. By 1990, Apple had started a significant speech recognition project under Kai-Fu Lee, who today is one of the top researchers and experts in AI. Of course, Apple's Siri employs modern-day AI and advanced machine learning to deliver answers to spoken questions or requests and is at the heart of Apple Maps. Apple not jumping into the ChatGPT fray now is reasonable, given the current arrows aimed at Microsoft's and Google's AI chatbot offerings. Although Bing's chatbot and Google's Bard are products with great potential, it was clear that researchers and savvy media would poke holes in their capabilities and make those failures the headlines.


Senior Applied Machine Learning Engineer - Core ML(Growth) at Earnin - United States

#artificialintelligence

As one of the first pioneers of earned wage access, our passion at Earnin is building products that deliver real-time financial flexibility for those living paycheck to paycheck. Our community members access their earnings as they earn them, with options to spend, save, and grow their money without mandatory fees, interest rates, or credit checks. Since our founding, our app has been downloaded over 13M times and we have provided access to $10 billion in earnings. We're fortunate to have an incredibly experienced leadership team, combined with world-class funding partners like A16Z, Matrix Partners, and DST, and a very healthy core business with a tremendous runway. We're growing fast and are excited to continue bringing world-class talent onboard to help shape the next chapter of our growth journey. As a Fintech company where Machine Learning (ML) is one of the key features, our operations rely heavily on machine learning models, from business decisions to customer experiences.


Stable Diffusion with Core ML on Apple Silicon - Apple Machine Learning Research

#artificialintelligence

Today, we are excited to release optimizations to Core ML for Stable Diffusion in macOS 13.1 and iOS 16.2, along with code to get started with deploying to Apple Silicon devices. Since its public debut in August 2022, Stable Diffusion has been adopted by a vibrant community of artists, developers and hobbyists alike, enabling the creation of unprecedented visual content with as little as a text prompt. In response, the community has built an expansive ecosystem of extensions and tools around this core technology in a matter of weeks. There are already methods that personalize Stable Diffusion, extend it to languages other than English, and more, thanks to open-source projects like Hugging Face diffusers. Beyond image generation from text prompts, developers are also discovering other creative uses for Stable Diffusion, such as image editing, in-painting, out-painting, super-resolution, style transfer and even color palette generation.


Stable Diffusion with Core ML on Apple Silicon

#artificialintelligence

An increasing number of the machine learning (ML) models we build at Apple each year are either partly or fully adopting the Transformer …


Core ML

#artificialintelligence

Core ML is Apple's machine learning framework that enables us to make our apps more intelligent. ML is simultaneously a problem as well as a solution: it is a field of study that allows computers to learn without being explicitly programmed. It is as if the computer has a teacher who, in a hand-holding way, tells it step by step what is right and what is wrong. An example of this is teaching a computer what a cat is. The training data is always clearly labeled and is fed to the machine learning model; the machine learns from this data and becomes able to perform the classification that is needed. We can then use our testing data to verify that the model produces the desired output.
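The labeled-training-then-testing workflow described above can be sketched with a toy classifier. The following is an illustrative pure-Python nearest-centroid model, not Core ML code; the feature values and labels are invented for the example.

```python
# Toy supervised learning: fit per-label centroids from labeled training
# data, then classify test points by nearest centroid. Illustrative only.

def train(examples):
    """examples: list of (features, label) pairs with numeric features."""
    sums, counts = {}, {}
    for features, label in examples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
            counts[label] = 0
        for i, x in enumerate(features):
            sums[label][i] += x
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]]
            for label in sums}

def predict(centroids, features):
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Labeled training data: (ear pointiness, whisker length) -> label.
model = train([([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
               ([0.2, 0.1], "dog"), ([0.1, 0.2], "dog")])
# A held-out test point checks that the learned model generalizes.
print(predict(model, [0.85, 0.85]))  # "cat"
```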


Consistency of Bilinear Upsampling Layer

#artificialintelligence

It is well known among deep-learning enthusiasts that bilinear upsampling layers in TensorFlow have pixel-offset issues. This has been partly fixed by adding an 'align_corners' attribute to them in TensorFlow 2.x. But the problem continues to cause inconsistent computation when a model trained in TensorFlow is exported to another DL framework across various versions. In my case, a neural network model with bilinear upsampling layers showed weird behavior when I converted the trained model from TensorFlow 2.5 to Apple Core ML using coremltools 3.4. After countless rounds of coding, trials, and deletion, I nearly gave up on getting consistent upsampling results between TensorFlow and Core ML.
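The offset issue comes down to how each output pixel coordinate maps back to an input coordinate. Below is a minimal sketch of the two common conventions (the "half-pixel-centers" mapping that is TensorFlow 2's default for resizing, and the align_corners mapping), written in plain Python for illustration only:

```python
# Source-coordinate mapping for bilinear resize along one axis.
# Two common conventions sample different source positions, which is
# the root of cross-framework inconsistencies when converting models.

def src_coord_half_pixel(x_out, size_in, size_out):
    # Half-pixel centers: pixel centers of input and output are aligned.
    return (x_out + 0.5) * size_in / size_out - 0.5

def src_coord_align_corners(x_out, size_in, size_out):
    # align_corners=True: the corner pixels of input and output coincide.
    return x_out * (size_in - 1) / (size_out - 1)

# Upsampling a 2-pixel row to 4 pixels samples different positions:
half = [src_coord_half_pixel(x, 2, 4) for x in range(4)]
ac = [src_coord_align_corners(x, 2, 4) for x in range(4)]
# half -> [-0.25, 0.25, 0.75, 1.25]; ac -> [0.0, 1/3, 2/3, 1.0]
```

If the exporting and importing frameworks disagree on which mapping a layer uses, every upsampled feature map is shifted by a fraction of a pixel, which matches the "weird behavior" described above.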


An Empirical Study on Deployment Faults of Deep Learning Based Mobile Applications

Chen, Zhenpeng, Yao, Huihan, Lou, Yiling, Cao, Yanbin, Liu, Yuanqiang, Wang, Haoyu, Liu, Xuanzhe

arXiv.org Artificial Intelligence

Deep Learning (DL) is finding its way into a growing number of mobile software applications. These software applications, referred to as DL-based mobile applications (abbreviated as mobile DL apps), integrate DL models trained on large-scale data with DL programs. A DL program encodes the structure of a desirable DL model and the process by which the model is trained using training data. Due to the increasing dependency of current mobile apps on DL, software engineering (SE) for mobile DL apps has become important. However, existing efforts in the SE research community mainly focus on the development of DL models and extensively analyze faults in DL programs. In contrast, faults related to the deployment of DL models on mobile devices (referred to as deployment faults of mobile DL apps) have not been well studied. Since mobile DL apps are used by billions of end users daily for various purposes, including safety-critical scenarios, characterizing their deployment faults is of enormous importance. To fill the knowledge gap, this paper presents the first comprehensive study on the deployment faults of mobile DL apps. We identify 304 real deployment faults from Stack Overflow and GitHub, two commonly used data sources for studying software faults. Based on the identified faults, we construct a fine-granularity taxonomy consisting of 23 categories of fault symptoms and distill common fix strategies for different fault types. Furthermore, we suggest actionable implications and research avenues that could further facilitate the deployment of DL models on mobile devices.


On device Machine Learning in iOS using Core ML,Swift,Neural Engine

#artificialintelligence

Core ML is a machine learning framework launched by Apple at WWDC 2017. It allows iOS developers to add real-time, personalized experiences to their apps with industry-leading, on-device machine learning models running on the Neural Engine. Apple introduced the A11 Bionic chip with the Neural Engine on September 12, 2017. This neural network hardware can perform up to 600 billion operations per second and is used for Face ID, Animoji, and other machine learning tasks. Developers can take advantage of the Neural Engine through the Core ML API.


PyTorch 1.3 Release Adds Support for Mobile, Privacy, and Transparency

#artificialintelligence

Facebook recently announced the release of PyTorch 1.3. The latest version of the open-source deep learning framework includes new tools for mobile, quantization, privacy, and transparency. Engineering director Lin Qiao took the stage at the recent PyTorch Developer Conference in San Francisco to highlight new features in the release, framing them with PyTorch's core principles of developer efficiency and building for scale. For building at scale, the release introduces new model quantization capabilities as well as support for mobile platforms and tensor-processing units (TPUs). Developer efficiency tools include tools for model transparency and data privacy.
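Model quantization, one of the headline capabilities mentioned above, maps floating-point weights onto a low-precision integer range. The following is a minimal affine (scale and zero-point) int8 quantization sketch in plain Python, illustrative only and not PyTorch's actual implementation:

```python
# Affine quantization: real_value ~= scale * (q - zero_point).
# Maps floats in [lo, hi] onto the signed int8 range [-128, 127].

def quantize(values, lo, hi):
    scale = (hi - lo) / 255.0
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(v / scale) + zero_point))
         for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [scale * (qi - zero_point) for qi in q]

# Quantize, then dequantize: values are recovered to within one step.
q, scale, zp = quantize([0.0, 0.5, 1.0], lo=0.0, hi=1.0)
restored = dequantize(q, scale, zp)
```

The round trip loses at most one quantization step (here `scale`, about 1/255), which is the accuracy-for-size trade-off that quantized deployment accepts.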